Checking whether SnapMirror global throttling limits bandwidth when restoring from an Amazon FSx for NetApp ONTAP backup


2024.01.15

I want to throttle bandwidth when restoring from an FSx backup

Hello, this is non-P (@non____97).

Have you ever wanted to throttle bandwidth when restoring from an Amazon FSx for NetApp ONTAP (FSxN) backup? I have.

When you restore from an FSx backup, the data is always written to the SSD tier, even if the volume's tiering policy is set to All.

As a result, restoring a large volume from backup while SSD free space is low can fill up the SSD tier.

Ideally, I would like to adjust the restore speed while watching how quickly data is tiered off to capacity pool storage.

SnapMirror supports bandwidth control, for example via global throttling.

As I understand it, FSx backups are powered by SnapMirror Cloud under the hood.

If SnapMirror Cloud is doing the work, the configured global throttling might also take effect when restoring from an FSx backup.

I actually tried it, so let me share the results.

Summary up front

  • SnapMirror global throttling has no effect on restores from FSx backups
  • Watch your SSD free space when using the FSx backup feature

Trying it out

Creating test files

Write test files to the volume I prepared.

$ sudo mount -t nfs svm-0ddaf812807b7cc6f.fs-0cdf890bf92643e74.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1
$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0ddaf812807b7cc6f.fs-0cdf890bf92643e74.fsx.us-east-1.amazonaws.com:/vol1 nfs4  244G  320K  244G   1% /mnt/fsxn/vol1

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/random_pattern_binary_block_16GiB bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 116.685 s, 147 MB/s

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0ddaf812807b7cc6f.fs-0cdf890bf92643e74.fsx.us-east-1.amazonaws.com:/vol1 nfs4  244G   17G  227G   7% /mnt/fsxn/vol1

Copy the 16 GiB file seven times, for eight files in total.

$ for i in {1..7}; do
  sudo cp /mnt/fsxn/vol1/random_pattern_binary_block_16GiB "/mnt/fsxn/vol1/random_pattern_binary_block_16GiB_copy_${i}"
done

$ ls -l /mnt/fsxn/vol1
total 134746208
-rw-r--r--. 1 root root 17179869184 Jan 15 05:51 random_pattern_binary_block_16GiB
-rw-r--r--. 1 root root 17179869184 Jan 15 05:55 random_pattern_binary_block_16GiB_copy_1
-rw-r--r--. 1 root root 17179869184 Jan 15 05:57 random_pattern_binary_block_16GiB_copy_2
-rw-r--r--. 1 root root 17179869184 Jan 15 05:59 random_pattern_binary_block_16GiB_copy_3
-rw-r--r--. 1 root root 17179869184 Jan 15 06:01 random_pattern_binary_block_16GiB_copy_4
-rw-r--r--. 1 root root 17179869184 Jan 15 06:03 random_pattern_binary_block_16GiB_copy_5
-rw-r--r--. 1 root root 17179869184 Jan 15 06:05 random_pattern_binary_block_16GiB_copy_6
-rw-r--r--. 1 root root 17179869184 Jan 15 06:07 random_pattern_binary_block_16GiB_copy_7

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0ddaf812807b7cc6f.fs-0cdf890bf92643e74.fsx.us-east-1.amazonaws.com:/vol1 nfs4  244G  130G  114G  54% /mnt/fsxn/vol1
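As a quick sanity check, the eight 16 GiB files account for roughly the 130G that df reports as Used:

```shell
# 8 files x 16 GiB each (the original plus 7 copies)
files=8
size_gib=16
echo "$((files * size_gib)) GiB"   # 128 GiB, close to the ~130G Used shown by df
```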

The Storage Efficiency, volume, and aggregate information after the writes is as follows.

::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> volume efficiency show -volume vol1 -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               true          true            true                              true                            false

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state   progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:00:31 Mon Jan 15 05:55:07 2024 Mon Jan 15 06:23:09 2024 26.25GB      38%             1017MB         129.9GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   256GB 113.3GB   256GB           243.2GB 129.9GB 53%          0B                 0%                         0B                  129.9GB       51%                   129.9GB      53%                  -                 129.9GB             all            -                                   -

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           129.9GB      14%
             Footprint in Performance Tier             2.56GB       2%
             Footprint in FSxFabricpoolObjectStore
                                                        128GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        752.3MB       0%
      Deduplication Metadata                          96.20MB       0%
           Deduplication                              96.20MB       0%
      Delayed Frees                                   667.5MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 131.4GB      14%

      Effective Total Footprint                       131.4GB      14%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0cdf890bf92643e74-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 128.8GB
                               Total Physical Used: 130.1GB
                    Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 128.8GB
Total Data Reduction Physical Used Without Snapshots: 130.1GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 128.8GB
Total Data Reduction Physical Used without snapshots and flexclones: 130.1GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 2.54GB
Total Physical Used in FabricPool Performance Tier: 3.27GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 2.54GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.27GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 128.8GB
               Physical Space Used for All Volumes: 128.8GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 130.1GB
              Physical Space Used by the Aggregate: 130.1GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 660KB
             Physical Size Used by Snapshot Copies: 272KB
              Snapshot Volume Data Reduction Ratio: 2.43:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.43:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     856.7GB   861.8GB 5.06GB   13.54GB       1%                    0B                          0%                                  0B                   129.1GB                      0B              0%                      0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              4.39GB         0%
      Aggregate Metadata                            681.1MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    50.42GB         6%

      Total Physical Used                           13.54GB         1%


      Total Provisioned Space                         257GB        28%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  129.1GB          -
      Logical Referenced Capacity                   128.5GB          -
      Logical Unreferenced Capacity                   639MB          -

      Total Physical Used                           129.1GB          -



2 entries were displayed.

Because the tiering policy is All, almost all of the data has been tiered to capacity pool storage.

As a side effect, deduplication has achieved no savings at all.

The SSD usage, capacity pool storage usage, and their total as seen in CloudWatch metrics at this point are shown below.

Storage usage before the backup

You can see that the data was tiered to capacity pool storage in one burst, roughly when the writes finished.

Tiering is a lower-priority operation than serving file access. So when there is a large amount of incoming writes, tiering to capacity pool storage cannot keep up and the SSD tier gets squeezed.
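If you want to watch SSD usage evolve during a large write or restore, you can query the CloudWatch metrics yourself. The sketch below uses the `AWS/FSx` namespace's `StorageUsed` metric with the `StorageTier` and `DataType` dimensions, as I understand the FSxN metrics; verify the metric and dimension names against the documentation before relying on this.

```shell
# Sketch: SSD-tier usage for the file system over the last hour, 1-minute datapoints.
# Metric/dimension names are my assumptions -- check them in your environment.
aws cloudwatch get-metric-statistics \
  --namespace AWS/FSx \
  --metric-name StorageUsed \
  --dimensions Name=FileSystemId,Value=fs-0cdf890bf92643e74 \
               Name=StorageTier,Value=SSD \
               Name=DataType,Value=All \
  --statistics Average \
  --period 60 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --region us-east-1
```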

Backing up

Back up the volume.

I took the volume backup from the management console.

Measure the time until the backup completes.

$ backup_id=backup-0d836ccc64d611d16

$ while true; do
  date

  progress_percent=$(aws fsx describe-backups \
    --backup-ids "$backup_id" \
    --query 'Backups[].ProgressPercent' \
    --output text \
    --region us-east-1
  )

  echo "Backup progress percent : ${progress_percent}"

  if [[ $progress_percent == 100 ]] ; then
    break
  else
    echo "-------------------"
  fi

  sleep 10
done
Mon Jan 15 06:29:09 AM UTC 2024
Backup progress percent : 6
-------------------
.
.
(snip)
.
.
-------------------
Mon Jan 15 06:31:04 AM UTC 2024
Backup progress percent : 23
-------------------
.
.
(snip)
.
.
-------------------
Mon Jan 15 06:33:07 AM UTC 2024
Backup progress percent : 41
-------------------
.
.
(snip)
.
.
-------------------
Mon Jan 15 06:35:10 AM UTC 2024
Backup progress percent : 58
-------------------
.
.
(snip)
.
.
-------------------
Mon Jan 15 06:37:24 AM UTC 2024
Backup progress percent : 78
-------------------
.
.
(snip)
.
.
-------------------
Mon Jan 15 06:39:26 AM UTC 2024
Backup progress percent : 96
-------------------
.
.
(snip)
.
.
-------------------
Mon Jan 15 06:42:03 AM UTC 2024
Backup progress percent : 99
-------------------
.
.
(snip)
.
.
-------------------
Mon Jan 15 06:44:29 AM UTC 2024
Backup progress percent : 100

$ aws fsx describe-backups \
    --backup-ids "$backup_id" \
    --region us-east-1
{
    "Backups": [
        {
            "BackupId": "backup-0d836ccc64d611d16",
            "Lifecycle": "AVAILABLE",
            "Type": "USER_INITIATED",
            "ProgressPercent": 100,
            "CreationTime": "2024-01-15T06:27:35.314000+00:00",
            "KmsKeyId": "arn:aws:kms:us-east-1:<AWS account ID>:key/73e96c0a-aeb6-4813-aae6-1882c899d445",
            "ResourceARN": "arn:aws:fsx:us-east-1:<AWS account ID>:backup/backup-0d836ccc64d611d16",
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "non-97-backup"
                }
            ],
            "OwnerId": "<AWS account ID>",
            "ResourceType": "VOLUME",
            "Volume": {
                "FileSystemId": "fs-0cdf890bf92643e74",
                "Lifecycle": "ACTIVE",
                "Name": "vol1",
                "OntapConfiguration": {
                    "JunctionPath": "/vol1",
                    "SizeInMegabytes": 262144,
                    "StorageEfficiencyEnabled": true,
                    "StorageVirtualMachineId": "svm-0ddaf812807b7cc6f",
                    "TieringPolicy": {
                        "Name": "ALL"
                    },
                    "CopyTagsToBackups": false,
                    "VolumeStyle": "FLEXVOL",
                    "SizeInBytes": 274877906944
                },
                "ResourceARN": "arn:aws:fsx:us-east-1:<AWS account ID>:volume/fsvol-0e8323cdf515cdb1b",
                "VolumeId": "fsvol-0e8323cdf515cdb1b",
                "VolumeType": "ONTAP"
            }
        }
    ]
}

The backup completed in about 17 minutes.
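That matches the timestamps in the transcript: the backup's CreationTime was 06:27:35 and the progress poll first reported 100% at 06:44:29.

```shell
# Elapsed time between backup creation and the 100% progress report
# (GNU date). Timestamps are taken from the transcript above.
start=$(date -u -d '2024-01-15T06:27:35Z' +%s)
end=$(date -u -d '2024-01-15T06:44:29Z' +%s)
echo "$((end - start)) s"   # 1014 s, roughly 17 minutes
```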

The management-activity audit log from when the backup was taken is as follows.

::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -state Error|Success -timestamp >"Mon Jan 15 06:25:00 2024"
timestamp                  node                      application vserver                username          input                                                                                                                       state   message
-------------------------- ------------------------- ----------- ---------------------- ----------------- --------------------------------------------------------------------------------------------------------------------------- ------- -------
"Mon Jan 15 06:27:54 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"} Success -
"Mon Jan 15 06:27:54 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships/?return_records=true : {"destination":{"path":"amazon-fsx-ontap-backup-us-east-1-208334d3-b7b9d7e0:/objstore/0c000000-0214-9cf3-0000-00000077b424","uuid":"0c000000-0214-9cf3-0000-00000077b424"},"policy":{"name":"FSxPolicy"},"source":{"path":"svm:vol1"}}
                                                                                                                                                                                                                                      Success -
"Mon Jan 15 06:27:54 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships : uuid=3747254f-b36f-11ee-a258-15b04af0968a isv_name="AWS FSx"                           Success -
"Mon Jan 15 06:27:55 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/storage/volumes/558964f1-b362-11ee-a258-15b04af0968a/snapshots?return_records=true : {"name":"backup-0d836ccc64d611d16"}
                                                                                                                                                                                                                                      Success -
"Mon Jan 15 06:27:55 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"} Success -
"Mon Jan 15 06:27:55 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships/3747254f-b36f-11ee-a258-15b04af0968a/transfers : isv_name="AWS FSx"                      Success -
"Mon Jan 15 06:27:55 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships/3747254f-b36f-11ee-a258-15b04af0968a/transfers?return_records=true : {"source_snapshot":"backup-0d836ccc64d611d16"}
                                                                                                                                                                                                                                      Success -
7 entries were displayed.

I tried to check the SnapMirror information while the backup was running, but nothing was visible.

::*> snapmirror show
This table is currently empty.

::*> snapmirror show -instance
There are no entries matching your query.

::*> snapmirror list-destinations show
There are no entries matching your query.

The SSD usage, capacity pool storage usage, and their total as seen in CloudWatch metrics after the backup completed are shown below.

Storage usage after the backup completed

There was no movement in particular.

Changing the SnapMirror global throttle

Change the SnapMirror global throttling settings.

Limit the incoming transfer rate to 4,096 KB/s, about 4 MiB/s. (The `replication.throttle.incoming.max_kbs` option is specified in KB/s.)

::*> options

FsxId0cdf890bf92643e74
    replication.throttle.incoming.max_kbs
                                      -                    -
    replication.throttle.outgoing.max_kbs
                                      -                    -
2 entries were displayed.

::*> options -option-name replication.throttle.incoming.max_kbs 4096
1 entry was modified.

::*> options

FsxId0cdf890bf92643e74
    replication.throttle.incoming.max_kbs
                                      4096                 -
    replication.throttle.outgoing.max_kbs
                                      -                    -
2 entries were displayed.

Restoring

Restore from the management console.

Restoring the volume

About four minutes after starting the restore, SnapMirror-related entries had been recorded in the audit log.

::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -state Error|Success -timestamp >"Mon Jan 15 06:57:00 2024"
timestamp                  node                      application vserver                username          input                                                                                                                                                                                                                                                                                                                                                                                       state   message
-------------------------- ------------------------- ----------- ---------------------- ----------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------- -------
"Mon Jan 15 07:01:35 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/storage/volumes/?return_records=true : {"comment":"FSx.tmp.fsvol-0a45d25b2d6540040.3880c77b-9b6e-4922-9f78-c77a81cdb7c3","language":"c.utf_8","name":"vol1_restored","size":274877906944,"tiering":{"policy":"ALL"},"type":"dp","aggregates":[{"name":"aggr1","uuid":"c001e19b-b361-11ee-a258-15b04af0968a"}],"svm":{"name":"svm","uuid":"23d2ef29-b362-11ee-a258-15b04af0968a"}} Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane PATCH /api/storage/volumes/ec120098-b373-11ee-a258-15b04af0968a : {"comment":""}                                                                                                                                                                                                                                                                                                            Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane set -privilege diagnostic                                                                                                                                                                                                                                                                                                                                                                   Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane system node run -node FsxId0cdf890bf92643e74-01 -command wafl obj_cache flush                                                                                                                                                                                                                                                                                                               Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane Logging out                                                                                                                                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0cdf890bf92643e74-01 -command wafl obj_cache flush"}                                                                                                                                                                                                                                               Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane set -privilege diagnostic                                                                                                                                                                                                                                                                                                                                                                   Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane system node run -node FsxId0cdf890bf92643e74-02 -command wafl obj_cache flush                                                                                                                                                                                                                                                                                                               Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane Logging out                                                                                                                                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0cdf890bf92643e74-02 -command wafl obj_cache flush"}                                                                                                                                                                                                                                               Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships/?return_records=true : {"destination":{"path":"svm:vol1_restored"},"restore":true,"source":{"path":"amazon-fsx-ontap-backup-us-east-1-208334d3-b7b9d7e0:/objstore/0c000000-0214-9cf3-0000-00000077b424_rst","uuid":"0c000000-0214-9cf3-0000-00000077b424"}}                                                                                              Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships : uuid=f2c36179-b373-11ee-a258-15b04af0968a isv_name="AWS FSx"                                                                                                                                                                                                                                                                                           Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships/f2c36179-b373-11ee-a258-15b04af0968a/transfers : isv_name="AWS FSx"                                                                                                                                                                                                                                                                                      Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships/f2c36179-b373-11ee-a258-15b04af0968a/transfers?return_records=true : {"source_snapshot":"backup-0d836ccc64d611d16"}                                                                                                                                                                                                                                      Success -
16 entries were displayed.

::*> date show
  (cluster date show)
Node      Date                      Time zone
--------- ------------------------- -------------------------
FsxId0cdf890bf92643e74-01
          1/15/2024 07:03:11 +00:00 Etc/UTC
FsxId0cdf890bf92643e74-02
          1/15/2024 07:03:11 +00:00 Etc/UTC
2 entries were displayed.

As before, no SnapMirror relationship or related information was visible.

::*> snapmirror show
This table is currently empty.

::*> snapmirror list-destinations
This table is currently empty.

Check the state of the volumes.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   256GB 113.3GB   256GB           243.2GB 129.9GB 53%          0B                 0%                         0B                  129.9GB       51%                   129.9GB      53%                  -                 129.9GB             all            -                                   -
svm     vol1_restored
               256GB 226.2GB   256GB           256GB   29.84GB 11%          77.06MB            0%                         77.06MB             29.44GB       11%                   29.91GB      12%                  -                 29.91GB             all            -                                   -
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           129.9GB      14%
             Footprint in Performance Tier             2.56GB       2%
             Footprint in FSxFabricpoolObjectStore
                                                        128GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        752.3MB       0%
      Deduplication Metadata                          96.20MB       0%
           Deduplication                              96.20MB       0%
      Delayed Frees                                   668.1MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 131.4GB      14%

      Effective Total Footprint                       131.4GB      14%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           29.87GB       3%
             Footprint in Performance Tier            25.61GB      86%
             Footprint in FSxFabricpoolObjectStore
                                                       4.28GB      14%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        214.9MB       0%
      Deduplication Metadata                            412KB       0%
           Temporary Deduplication                      412KB       0%
      Delayed Frees                                   22.21MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 30.10GB       3%

      Effective Total Footprint                       30.10GB       3%
2 entries were displayed.

Nearly 30 GB has been transferred even though only a few minutes have passed. The Tiering Policy is set to All, yet tiering is not keeping up with the restore speed at all.

I waited about two more minutes.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   256GB 113.3GB   256GB           243.2GB 129.9GB 53%          0B                 0%                         0B                  129.9GB       51%                   129.9GB      53%                  -                 129.9GB             all            -                                   -
svm     vol1_restored
               256GB 193.8GB   256GB           256GB   62.16GB 24%          77.06MB            0%                         77.06MB             61.98GB       24%                   62.24GB      24%                  -                 62.24GB             all            -                                   -
2 entries were displayed.

::*> date show
  (cluster date show)
Node      Date                      Time zone
--------- ------------------------- -------------------------
FsxId0cdf890bf92643e74-01
          1/15/2024 07:06:44 +00:00 Etc/UTC
FsxId0cdf890bf92643e74-02
          1/15/2024 07:06:44 +00:00 Etc/UTC
2 entries were displayed.

A full 62 GB has already been written to the restored volume. Global throttling clearly does not seem to be taking effect.
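As a sanity check on the numbers: the transfer started at 07:01:47 (per the audit log later in this post), and 62.16 GB had landed by 07:06:44, roughly 297 seconds. A quick back-of-the-envelope calculation confirms how far the observed rate is above the configured throttle:

```shell
#!/bin/bash
# Integer back-of-the-envelope: GiB transferred over elapsed seconds -> KiB/s.
rate_kibs() {
  echo $(( $1 * 1024 * 1024 / $2 ))
}

# ~62 GiB in ~297 s comes out to roughly 218,000 KiB/s,
# more than 50x the 4,096 KB/s incoming throttle configured earlier.
rate_kibs 62 297
```

If the incoming throttle were being honored, the restore would have moved well under 2 GB in that window.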

Although I don't expect it to make a difference, I also set the outgoing global throttle to 4,096 KB/s, matching the incoming side.

::*> options -option-name replication.throttle.outgoing.max_kbs 4096
1 entry was modified.

::*> options

FsxId0cdf890bf92643e74
    replication.throttle.incoming.max_kbs
                                      4096                 -
    replication.throttle.outgoing.max_kbs
                                      4096                 -
2 entries were displayed.

One minute after the change:

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   256GB 113.3GB   256GB           243.2GB 129.9GB 53%          0B                 0%                         0B                  129.9GB       51%                   129.9GB      53%                  -                 129.9GB             all            -                                   -
svm     vol1_restored
               256GB 149.5GB   256GB           256GB   106.5GB 41%          77.06MB            0%                         77.06MB             106.4GB       42%                   106.6GB      42%                  -                 106.6GB             all            -                                   -
2 entries were displayed.

::*> date show
  (cluster date show)
Node      Date                      Time zone
--------- ------------------------- -------------------------
FsxId0cdf890bf92643e74-01
          1/15/2024 07:09:48 +00:00 Etc/UTC
FsxId0cdf890bf92643e74-02
          1/15/2024 07:09:48 +00:00 Etc/UTC
2 entries were displayed.

106 GB has now been transferred.

Normally, a change to global throttling is reflected in the transfer speed immediately. In this case, however, it appears to have had no effect on the transfer speed at all.

About 15 minutes after the restore started, the restored volume's lifecycle state changed to Created.
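Incidentally, this lifecycle transition can also be watched from the AWS CLI with `aws fsx describe-volumes`. A minimal sketch; the volume ID is the one that appears in the audit log below, so adjust it for your environment:

```shell
#!/bin/bash
# Pure helper: has the volume finished creating?
is_created() {
  [ "$1" = "CREATED" ]
}

# Poll the restored volume's lifecycle once a minute until it reaches CREATED.
wait_for_created() {
  local state
  while true; do
    state=$(aws fsx describe-volumes --volume-ids "$1" \
      --query 'Volumes[0].Lifecycle' --output text)
    echo "$(date -u +%H:%M:%S) lifecycle: $state"
    is_created "$state" && return 0
    sleep 60
  done
}
```

Usage: `wait_for_created fsvol-0a45d25b2d6540040`. Note that Created only means the SnapMirror Cloud transfer has finished; as shown below, tiering to capacity pool storage continues afterward.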

The audit log at that point is as follows.

::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -state Error|Success -timestamp >"Mon Jan 15 06:57:00 2024"
timestamp                  node                      application vserver                username          input                                                                                                                                                                                                                                                                                                                                                                                       state   message
-------------------------- ------------------------- ----------- ---------------------- ----------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------- -------
"Mon Jan 15 07:01:35 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/storage/volumes/?return_records=true : {"comment":"FSx.tmp.fsvol-0a45d25b2d6540040.3880c77b-9b6e-4922-9f78-c77a81cdb7c3","language":"c.utf_8","name":"vol1_restored","size":274877906944,"tiering":{"policy":"ALL"},"type":"dp","aggregates":[{"name":"aggr1","uuid":"c001e19b-b361-11ee-a258-15b04af0968a"}],"svm":{"name":"svm","uuid":"23d2ef29-b362-11ee-a258-15b04af0968a"}} Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane PATCH /api/storage/volumes/ec120098-b373-11ee-a258-15b04af0968a : {"comment":""}                                                                                                                                                                                                                                                                                                            Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane set -privilege diagnostic                                                                                                                                                                                                                                                                                                                                                                   Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane system node run -node FsxId0cdf890bf92643e74-01 -command wafl obj_cache flush                                                                                                                                                                                                                                                                                                               Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane Logging out                                                                                                                                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0cdf890bf92643e74-01 -command wafl obj_cache flush"}                                                                                                                                                                                                                                               Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane set -privilege diagnostic                                                                                                                                                                                                                                                                                                                                                                   Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane system node run -node FsxId0cdf890bf92643e74-02 -command wafl obj_cache flush                                                                                                                                                                                                                                                                                                               Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane Logging out                                                                                                                                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0cdf890bf92643e74-02 -command wafl obj_cache flush"}                                                                                                                                                                                                                                               Success -
"Mon Jan 15 07:01:46 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships/?return_records=true : {"destination":{"path":"svm:vol1_restored"},"restore":true,"source":{"path":"amazon-fsx-ontap-backup-us-east-1-208334d3-b7b9d7e0:/objstore/0c000000-0214-9cf3-0000-00000077b424_rst","uuid":"0c000000-0214-9cf3-0000-00000077b424"}}                                                                                              Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships : uuid=f2c36179-b373-11ee-a258-15b04af0968a isv_name="AWS FSx"                                                                                                                                                                                                                                                                                           Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships/f2c36179-b373-11ee-a258-15b04af0968a/transfers : isv_name="AWS FSx"                                                                                                                                                                                                                                                                                      Success -
"Mon Jan 15 07:01:47 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/snapmirror/relationships/f2c36179-b373-11ee-a258-15b04af0968a/transfers?return_records=true : {"source_snapshot":"backup-0d836ccc64d611d16"}                                                                                                                                                                                                                                      Success -
"Mon Jan 15 07:07:55 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsxadmin          vserver options -option-name replication.throttle.outgoing.max_kbs 4096                                                                                                                                                                                                                                                                                                                     Success 1 entry was modified.
"Mon Jan 15 07:13:06 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane set -privilege diagnostic                                                                                                                                                                                                                                                                                                                                                                   Success -
"Mon Jan 15 07:13:06 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane volume efficiency inactive-data-compression stop -volume vol1_restored -vserver svm                                                                                                                                                                                                                                                                                                         Success -
"Mon Jan 15 07:13:06 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane Logging out                                                                                                                                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:13:06 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; volume efficiency inactive-data-compression stop -volume vol1_restored -vserver svm"}                                                                                                                                                                                                                                         Success -
"Mon Jan 15 07:13:06 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane set -privilege diagnostic                                                                                                                                                                                                                                                                                                                                                                   Success -
"Mon Jan 15 07:13:06 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane volume efficiency inactive-data-compression modify -volume vol1_restored -vserver svm -is-enabled false                                                                                                                                                                                                                                                                                     Success -
"Mon Jan 15 07:13:06 2024" FsxId0cdf890bf92643e74-01 ssh         FsxId0cdf890bf92643e74 fsx-control-plane Logging out                                                                                                                                                                                                                                                                                                                                                                                 Success -
"Mon Jan 15 07:13:06 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; volume efficiency inactive-data-compression modify -volume vol1_restored -vserver svm -is-enabled false"}                                                                                                                                                                                                                     Success -
"Mon Jan 15 07:13:07 2024" FsxId0cdf890bf92643e74-01 http        FsxId0cdf890bf92643e74 fsx-control-plane PATCH /api/storage/volumes/ec120098-b373-11ee-a258-15b04af0968a : {"tiering":{"policy":"ALL"},"nas":{"path":"/vol1_restored","security_style":"unix"},"snapshot_policy":{"name":"none"}}                                                                                                                                                                                                    Success -
26 entries were displayed.

The Storage Efficiency, volume, and aggregate information immediately after the restore completed is as follows.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state   progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:52:54 Mon Jan 15 05:55:07 2024 Mon Jan 15 06:23:09 2024 26.25GB      38%             1017MB         129.9GB
svm     vol1_restored
               Enabled 405928 KB (3%) Done
                                         Mon Jan 15 07:04:02 2024 Mon Jan 15 07:12:26 2024 0B           38%             1.25GB         129.8GB
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   256GB 113.3GB   256GB           243.2GB 129.9GB 53%          0B                 0%                         0B                  129.9GB       51%                   129.9GB      53%                  -                 129.9GB             all            -                                   -
svm     vol1_restored
               256GB 126.3GB   256GB           256GB   129.7GB 50%          77.06MB            0%                         77.06MB             129.7GB       51%                   129.8GB      51%                  -                 129.8GB             all            -                                   -
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           129.9GB      14%
             Footprint in Performance Tier             2.56GB       2%
             Footprint in FSxFabricpoolObjectStore
                                                        128GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        752.3MB       0%
      Deduplication Metadata                          96.20MB       0%
           Deduplication                              96.20MB       0%
      Delayed Frees                                   668.1MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 131.4GB      14%

      Effective Total Footprint                       131.4GB      14%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           129.7GB      14%
             Footprint in Performance Tier            60.88GB      47%
             Footprint in FSxFabricpoolObjectStore
                                                      68.95GB      53%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        752.3MB       0%
      Deduplication Metadata                          178.8MB       0%
           Temporary Deduplication                    178.8MB       0%
      Delayed Frees                                   103.4MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 130.7GB      14%

      Effective Total Footprint                       130.7GB      14%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0cdf890bf92643e74-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 517.0GB
                               Total Physical Used: 290.0GB
                    Total Storage Efficiency Ratio: 1.78:1
Total Data Reduction Logical Used Without Snapshots: 257.3GB
Total Data Reduction Physical Used Without Snapshots: 290.0GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 257.3GB
Total Data Reduction Physical Used without snapshots and flexclones: 290.0GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 126.8GB
Total Physical Used in FabricPool Performance Tier: 95.60GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.33:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 63.42GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 95.60GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 257.3GB
               Physical Space Used for All Volumes: 257.2GB
               Space Saved by Volume Deduplication: 77.06MB
Space Saved by Volume Deduplication and pattern detection: 77.06MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 290.0GB
              Physical Space Used by the Aggregate: 290.0GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 259.7GB
             Physical Size Used by Snapshot Copies: 796KB
              Snapshot Volume Data Reduction Ratio: 342114.97:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 342114.97:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     793.9GB   861.8GB 67.86GB  105.9GB       12%                   0B                          0%                                  0B                   198.7GB                      0B              0%                      0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             66.19GB         7%
      Aggregate Metadata                             1.66GB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    113.2GB        12%

      Total Physical Used                           105.9GB        12%


      Total Provisioned Space                         513GB        57%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  198.7GB          -
      Logical Referenced Capacity                   197.9GB          -
      Logical Unreferenced Capacity                 766.3MB          -

      Total Physical Used                           198.7GB          -



2 entries were displayed.

::*> date show
  (cluster date show)
Node      Date                      Time zone
--------- ------------------------- -------------------------
FsxId0cdf890bf92643e74-01
          1/15/2024 07:16:09 +00:00 Etc/UTC
FsxId0cdf890bf92643e74-02
          1/15/2024 07:16:09 +00:00 Etc/UTC
2 entries were displayed.

The restored volume is still very much in the middle of being tiered.

The CloudWatch metrics at that time looked like the following.

Storage usage after the restore completed

Once the restore was nearly complete and the data transfer volume dropped off, data was tiered to capacity pool storage all at once.
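Since it is SSD free space that gets squeezed during a restore, it may be worth watching the file system's SSD utilization from CloudWatch while the restore runs. A minimal sketch, assuming the `StorageCapacityUtilization` metric with the `StorageTier=SSD` / `DataType=All` dimensions (check the FSx for ONTAP CloudWatch metrics documentation for your environment); the file system ID used in the usage example is the one from this post:

```shell
#!/bin/bash
# Pure helper: flag when SSD utilization (integer %) reaches a warning threshold.
ssd_over_threshold() {
  [ "$1" -ge "$2" ]
}

# Fetch the latest SSD utilization datapoint for a file system.
# Metric and dimension names are assumptions based on the FSx for ONTAP
# CloudWatch documentation; verify them before relying on this.
latest_ssd_utilization() {
  aws cloudwatch get-metric-statistics \
    --namespace AWS/FSx \
    --metric-name StorageCapacityUtilization \
    --dimensions Name=FileSystemId,Value="$1" \
                 Name=StorageTier,Value=SSD Name=DataType,Value=All \
    --start-time "$(date -u -d '10 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 --statistics Average \
    --query 'sort_by(Datapoints,&Timestamp)[-1].Average' --output text
}
```

Usage: `ssd_over_threshold "$(latest_ssd_utilization fs-0cdf890bf92643e74 | cut -d. -f1)" 80 && echo "SSD is getting full"`. Since throttling the restore itself is not an option, alerting on this metric is about the only lever left.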

After waiting about five more minutes, tiering completed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state   progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 01:02:14 Mon Jan 15 05:55:07 2024 Mon Jan 15 06:23:09 2024 26.25GB      38%             1017MB         129.9GB
svm     vol1_restored
               Enabled 8841564 KB (81%) Done
                                         Mon Jan 15 07:04:02 2024 Mon Jan 15 07:12:26 2024 0B           38%             1.25GB         129.8GB
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   256GB 113.3GB   256GB           243.2GB 129.9GB 53%          0B                 0%                         0B                  129.9GB       51%                   129.9GB      53%                  -                 129.9GB             all            -                                   -
svm     vol1_restored
               256GB 125.8GB   256GB           256GB   130.2GB 50%          77.06MB            0%                         77.06MB             130.2GB       51%                   130.3GB      51%                  -                 129.8GB             all            -                                   -
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           129.9GB      14%
             Footprint in Performance Tier             2.56GB       2%
             Footprint in FSxFabricpoolObjectStore
                                                        128GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        752.3MB       0%
      Deduplication Metadata                          96.20MB       0%
           Deduplication                              96.20MB       0%
      Delayed Frees                                   668.1MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 131.4GB      14%

      Effective Total Footprint                       131.4GB      14%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           130.2GB      14%
             Footprint in Performance Tier             2.44GB       2%
             Footprint in FSxFabricpoolObjectStore
                                                      127.9GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        752.3MB       0%
      Deduplication Metadata                          178.8MB       0%
           Temporary Deduplication                    178.8MB       0%
      Delayed Frees                                   130.6MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 131.3GB      14%

      Effective Total Footprint                       131.3GB      14%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0cdf890bf92643e74-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 517.0GB
                               Total Physical Used: 260.6GB
                    Total Storage Efficiency Ratio: 1.98:1
Total Data Reduction Logical Used Without Snapshots: 257.3GB
Total Data Reduction Physical Used Without Snapshots: 260.1GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 257.3GB
Total Data Reduction Physical Used without snapshots and flexclones: 260.1GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 9.96GB
Total Physical Used in FabricPool Performance Tier: 7.00GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.42:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.98GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 7.00GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 257.3GB
               Physical Space Used for All Volumes: 257.2GB
               Space Saved by Volume Deduplication: 77.06MB
Space Saved by Volume Deduplication and pattern detection: 77.06MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 260.6GB
              Physical Space Used by the Aggregate: 260.6GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 259.7GB
             Physical Size Used by Snapshot Copies: 516.9MB
              Snapshot Volume Data Reduction Ratio: 514.49:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 514.49:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     852.7GB   861.8GB 9.02GB   17.18GB       2%                    0B                          0%                                  0B                   258.2GB                      0B              0%                      0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              7.75GB         1%
      Aggregate Metadata                             1.26GB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    54.37GB         6%

      Total Physical Used                           17.18GB         2%


      Total Provisioned Space                         513GB        57%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  258.2GB          -
      Logical Referenced Capacity                   256.9GB          -
      Logical Unreferenced Capacity                  1.24GB          -

      Total Physical Used                           258.2GB          -



2 entries were displayed.

The CloudWatch metrics at this point are as follows.

Storage usage after tiering completed

SnapMirror global throttling does not limit bandwidth when restoring from an Amazon FSx for NetApp ONTAP backup

I tested whether SnapMirror global throttling limits bandwidth when restoring from an Amazon FSx for NetApp ONTAP backup.

The verification showed that SnapMirror global throttling has no effect on restores from FSx backups.

Be careful when a volume with Tiering Policy All has a logical size that is large relative to the free SSD capacity.
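As a rough illustration of that constraint, the check below encodes the behavior observed in this test: a restore from an FSx backup first writes the volume's full logical size to the SSD tier, even with Tiering Policy All, so the free SSD capacity must cover it. This is a minimal sketch; the function name and the 10% metadata headroom are my own assumptions, not an official formula.

```python
def restore_fits_ssd(volume_logical_bytes: int, ssd_free_bytes: int,
                     headroom: float = 0.10) -> bool:
    """Return True if restoring a backup should fit on the SSD tier.

    A restore from an FSx backup writes the volume's full logical size
    to SSD first, even with Tiering Policy All, so free SSD capacity
    must cover it (plus some headroom for metadata; 10% is a guess).
    """
    required = volume_logical_bytes * (1 + headroom)
    return ssd_free_bytes >= required


GiB = 1024 ** 3

# Numbers from this test: ~130 GiB logical volume vs. ~852.7 GiB free SSD
print(restore_fits_ssd(130 * GiB, int(852.7 * GiB)))  # True: plenty of room
```

If this returns False, expand the SSD tier (or free up space) before starting the restore, rather than relying on tiering to keep up.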

Your options are to expand the SSD, or to temporarily restore to a separate FSxN file system and then transfer the data with SnapMirror. Note that once SSD capacity has been increased, it cannot be decreased.
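For reference, expanding the SSD tier can be done with the AWS CLI's `aws fsx update-file-system` command. The file system ID and target capacity below are placeholders for illustration:

```shell
# Increase the SSD storage capacity of an FSxN file system.
# fs-0123456789abcdef0 is a placeholder ID; --storage-capacity is in GiB
# and must be at least 10% larger than the current value.
aws fsx update-file-system \
  --file-system-id fs-0123456789abcdef0 \
  --storage-capacity 2048
```

The capacity increase runs as a background optimization task, so plan the restore for after it completes.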

Given that, I increasingly think it may be better to manage backups with SnapMirror from the start.

I hope this article helps someone.

That's all from のんピ (@non____97), Consulting Department, AWS Business Division!


© Classmethod, Inc. All rights reserved.